Talk by Surya Ganguli, UCSF, to the Redwood Center for Theoretical Neuroscience. Given on May 13, 2009 at UC Berkeley.
Critical cognitive phenomena such as planning and decision making rely on the ability of the brain to hold information in working memory. Many proposals exist for the maintenance of such memories in persistent activity that arises from stable fixed point attractors in the dynamics of recurrent neural networks. However, such fixed points are incapable of storing temporal sequences of recent events. An alternative, and less explored, paradigm is the storage of arbitrary temporal input sequences in the transient responses of a recurrent neural network. Such a paradigm raises a host of important questions. Are there any fundamental limits on the duration of such transient memory traces? How do these limits depend on the size of the network? What patterns of neural connectivity yield good performance on generic working memory tasks? To what extent do these traces degrade in the presence of noise? We combine Fisher information theory with dynamical systems theory to give precise answers to these questions for the class of all linear, and some nonlinear, neuronal networks. We uncover an important role for a special class of networks, known as nonnormal networks. Such networks are characterized by a (possibly hidden) feedforward structure, which is crucial for the maintenance of robust memory traces.
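The nonnormal, feedforward networks mentioned in the abstract can be illustrated with a minimal sketch (my own illustration, not taken from the talk): a linear network x[t+1] = W x[t] + v s[t] whose connectivity W is a pure delay line. Such a W is nonnormal (it does not commute with its transpose) and stores the recent input sequence in its transient activity, with neuron k holding the input from k time steps ago. The network size and input vector below are arbitrary choices for the example.

```python
import numpy as np

# Hypothetical illustration of a nonnormal (feedforward) linear network
# holding a transient memory trace. Dynamics: x[t+1] = W x[t] + v * s[t].

N = 8                                  # network size (arbitrary choice)
W = np.diag(np.ones(N - 1), k=-1)      # delay line: neuron i drives neuron i+1
v = np.zeros(N)
v[0] = 1.0                             # input enters at the head of the chain

# W is nonnormal: it does not commute with its transpose.
assert not np.allclose(W @ W.T, W.T @ W)

rng = np.random.default_rng(0)
s = rng.standard_normal(20)            # a random scalar input sequence

x = np.zeros(N)
for st in s:
    x = W @ x + v * st                 # run the linear recurrent dynamics

# Transient memory trace: neuron k now holds the input from k steps ago,
# so the last N inputs are recoverable from the current state alone.
for k in range(N):
    assert np.isclose(x[k], s[len(s) - 1 - k])
```

In this idealized, noise-free case the chain stores exactly N past inputs; the talk's question is how long such traces can survive once noise corrupts the dynamics, which is where the Fisher information analysis enters.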